22 research outputs found

    Gesture analysis, recognition and execution for surgical robotic training (Analyse, reconnaissance et réalisation des gestes pour l'entraînement en chirurgie laparoscopique robotisée)

    Get PDF
    Integration of robotic systems in the operating room has changed the way surgeries are performed. It modifies practices to improve the medical benefit for the patient, but it also introduces non-traditional aspects that can lead to serious adverse events. Recent studies from the French National Authority for Health (Haute Autorité de Santé) highlight that these adverse events mainly stem from the surgeon's technical skills, which calls surgical robotic training and teaching into question. To address this issue, surgical simulators help train practitioners through different training tasks and provide feedback to the operator. However, this feedback is partial and does not help the surgeon understand gestural mistakes. We therefore want to improve training conditions in robotic laparoscopic surgery. The objective of this work is twofold. First, we developed a new method for the segmentation and recognition of surgical gestures during training sessions, based on an unsupervised approach. From the kinematic data of the surgical tools, we achieve gesture recognition with 82% accuracy. This method is a first step toward evaluating technical skills from individual gestures rather than from the global execution of the task, as is done today. The second objective is to make surgical training more accessible and less expensive. To that end, we studied a new contactless human-machine interface to control surgical robots. In this work, the interface is coupled to the Raven-II, a teleoperated robot dedicated to surgical robotics research. We then evaluated the performance of this system through several studies, concluding that this interface can be used to control surgical robots. One can therefore consider using this contactless interface for simulator-based surgical training, reducing the cost of training and improving novice surgeons' access to the technical-skills training specific to robotic surgery.

    Unsupervised Trajectory Segmentation for Surgical Gesture Recognition in Robotic Training

    No full text
    Dexterity and procedural knowledge are two critical skills that surgeons need to master to perform accurate and safe surgical interventions. However, current training systems do not allow an in-depth analysis of surgical gestures to precisely assess these skills. Our objective is to develop a method for the automatic and quantitative assessment of surgical gestures. To reach this goal, we propose a new unsupervised algorithm that can automatically segment kinematic data from robotic training sessions. Without relying on any prior information or model, the algorithm detects critical points in the kinematic data that define relevant spatio-temporal segments. By associating these segments, we obtain an accurate recognition of the gestures involved in the surgical training task. We then assess our algorithm on datasets recorded during real expert training sessions. Compared with manual annotations of the surgical gestures, we observe 97.4% accuracy for the learning step and an average matching score of 81.9% for the fully automated gesture recognition process. Our results show that a trainee's workflow can be followed and that surgical gestures may be automatically evaluated against an expert database. This approach aims to improve training efficiency by shortening the learning curve.
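    As a hedged illustration of the general idea (not the paper's actual algorithm), the sketch below segments a tool trajectory at local minima of the speed profile, one plausible notion of "critical points"; the smoothing window and minimum segment length are assumed parameters.

```python
import numpy as np

def segment_at_speed_minima(positions, dt=0.01, min_gap=20):
    """Split a tool trajectory into segments at local speed minima.

    positions: (T, 3) array of Cartesian tool positions.
    Returns a list of (start, end) index pairs, one per segment.
    Hypothetical simplification of critical-point detection: gestures
    are assumed to be separated by near-zero-velocity instants.
    """
    speed = np.linalg.norm(np.diff(positions, axis=0), axis=1) / dt
    # Local minima of the smoothed speed profile act as critical points.
    kernel = np.ones(5) / 5.0
    smooth = np.convolve(speed, kernel, mode="same")
    minima = [i for i in range(1, len(smooth) - 1)
              if smooth[i] <= smooth[i - 1] and smooth[i] <= smooth[i + 1]]
    # Enforce a minimum segment length to avoid over-segmentation.
    cuts, last = [0], 0
    for i in minima:
        if i - last >= min_gap:
            cuts.append(i)
            last = i
    cuts.append(len(positions) - 1)
    return list(zip(cuts[:-1], cuts[1:]))

# Example: a synthetic back-and-forth trajectory.
t = np.linspace(0, 2 * np.pi, 400)
traj = np.stack([np.sin(t), np.cos(2 * t), np.zeros_like(t)], axis=1)
print(segment_at_speed_minima(traj))
```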

    Evaluation of contactless human–machine interface for robotic surgical training

    Get PDF
    Purpose: Teleoperated robotic systems are now routinely used for specific interventions. The benefits of robotic training courses are acknowledged by the community, since manipulating such systems requires dedicated training. However, robotic surgical simulators remain expensive and require a dedicated human–machine interface. Methods: We present a low-cost contactless optical sensor, the Leap Motion, as a novel control device for the RAVEN-II robot. We compare peg manipulations during a training task with a contact-based device, the electro-mechanical Sigma.7. We perform two complementary analyses to quantitatively assess the performance of each control method: a metric-based comparison and a novel unsupervised spatio-temporal trajectory clustering. Results: We show that contactless control does not offer as good manipulability as the contact-based one. While part of the metric-based evaluation rates the mechanical control above the contactless one, the unsupervised spatio-temporal clustering of the surgical tool motions highlights a specific signature induced by each human–machine interface. Conclusions: Even if the current implementation of contactless control does not surpass manipulation with a high-standard mechanical interface, we demonstrate that complete control of the surgical instruments with the optical sensor is feasible. The proposed method allows fine tracking of the trainee's hands to execute dexterous laparoscopic training gestures. This work is promising for the development of future human–machine interfaces dedicated to robotic surgical training systems.
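    The abstract does not list the metrics used in the comparison; the sketch below computes generic teleoperation training metrics (completion time, path length, economy of motion) from tool-tip trajectories. Any resemblance to the paper's actual metric set is an assumption.

```python
import numpy as np

def interface_metrics(tooltip_xyz, dt):
    """Common skill metrics for a recorded peg-transfer trial.

    tooltip_xyz: (T, 3) tool-tip positions in metres; dt: sample period in s.
    These are generic metrics, not necessarily the ones used in the paper.
    """
    steps = np.diff(tooltip_xyz, axis=0)
    path_length = float(np.sum(np.linalg.norm(steps, axis=1)))
    completion_time = (len(tooltip_xyz) - 1) * dt
    # Economy of motion: straight-line distance over actual path length.
    direct = float(np.linalg.norm(tooltip_xyz[-1] - tooltip_xyz[0]))
    economy = direct / path_length if path_length > 0 else 0.0
    return {"time_s": completion_time,
            "path_m": path_length,
            "economy": economy}

# Compare two interfaces on the same (synthetic) trial; the noisier
# trajectory stands in for less stable contactless tracking.
rng = np.random.default_rng(0)
sigma7 = np.cumsum(rng.normal(0, 1e-3, (500, 3)), axis=0)
leap = np.cumsum(rng.normal(0, 2e-3, (500, 3)), axis=0)
for name, trial in [("Sigma.7", sigma7), ("Leap Motion", leap)]:
    print(name, interface_metrics(trial, dt=0.01))
```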

    Discovering Discriminative and Interpretable Patterns for Surgical Motion Analysis

    No full text
    The analysis of surgical motion has received growing interest with the development of devices that allow its automatic capture. In this context, advanced surgical training systems make an automated assessment of surgical trainees possible. Automatic and quantitative evaluation of surgical skills is a very important step toward improving surgical patient care. In this paper, we present a novel approach for the discovery and ranking of discriminative and interpretable patterns of surgical practice from recordings of surgical motions. A pattern is defined as a series of actions or events in the kinematic data that together are distinctive of a specific gesture or skill level. Our approach is based on the discretization of the continuous kinematic data into strings, which are then processed to form bags of words. This step allows us to apply a discriminative pattern mining technique based on word occurrence frequency. We show that the patterns identified by the proposed technique can be used to accurately classify individual gestures and skill levels. We also show how the patterns provide detailed feedback for trainee skill assessment. Experimental evaluation on the publicly available JIGSAWS dataset shows that the proposed approach successfully classifies gestures and skill levels. © Springer International Publishing AG 2017
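    The abstract outlines a pipeline of discretizing kinematic signals into strings, forming bags of words, and ranking patterns by occurrence frequency. The toy sketch below follows that outline; the quantile binning, n-gram length, and frequency-difference ranking are assumptions, not the paper's exact choices.

```python
import numpy as np
from collections import Counter

def to_string(signal, n_bins=4, alphabet="abcd"):
    """Discretize a 1-D kinematic signal into a symbol string
    (quantile binning here; the paper's exact scheme may differ)."""
    edges = np.quantile(signal, np.linspace(0, 1, n_bins + 1)[1:-1])
    return "".join(alphabet[i] for i in np.digitize(signal, edges))

def bag_of_words(string, n=3):
    """Count overlapping n-grams ('words') in the symbol string."""
    return Counter(string[i:i + n] for i in range(len(string) - n + 1))

def discriminative_patterns(bags_a, bags_b, top=5):
    """Rank words by difference in mean relative frequency between classes."""
    def mean_freq(bags):
        freqs = Counter()
        for bag in bags:
            total = sum(bag.values())
            for w, c in bag.items():
                freqs[w] += c / total / len(bags)
        return freqs
    fa, fb = mean_freq(bags_a), mean_freq(bags_b)
    words = set(fa) | set(fb)
    return sorted(words, key=lambda w: abs(fa[w] - fb[w]), reverse=True)[:top]

# Toy example: 'expert' motions are smooth, 'novice' motions are erratic.
rng = np.random.default_rng(1)
experts = [bag_of_words(to_string(np.sin(np.linspace(0, 6, 200))
                                  + rng.normal(0, 0.05, 200))) for _ in range(5)]
novices = [bag_of_words(to_string(rng.normal(0, 1, 200))) for _ in range(5)]
print(discriminative_patterns(experts, novices))
```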

    Towards unified dataset for Modeling and Monitoring of Computer Assisted Medical Interventions

    No full text
    M2CAMI is a working group dedicated to the Modeling and Monitoring of Computer Assisted Medical Interventions within the CAMI Labex. It aims at unifying data acquired from different surgical trainers and procedures for collaborative research. In this paper, we propose a generic structure for multi-modal datasets that allows faster development and easier processing. With such a formalization, our objective is to go beyond the state of the art by sharing various types of data between international institutions and merging methodological approaches for better detection and understanding of surgical workflows.
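    The abstract does not detail the proposed structure; as one hedged illustration, the sketch below defines a minimal schema for a unified multi-modal record. All field names are hypothetical, not the working group's actual format.

```python
from dataclasses import dataclass, field
from typing import Dict, List

@dataclass
class Modality:
    """One synchronized data stream (kinematics, video, annotations, ...)."""
    name: str              # e.g. "tool_kinematics"
    sample_rate_hz: float
    unit: str
    uri: str               # where the raw samples are stored

@dataclass
class InterventionRecord:
    """A hypothetical unified record for one training session or procedure."""
    procedure: str
    institution: str
    device: str                        # e.g. "Raven-II"
    modalities: List[Modality] = field(default_factory=list)
    metadata: Dict[str, str] = field(default_factory=dict)

rec = InterventionRecord(
    procedure="peg_transfer",
    institution="example_lab",
    device="Raven-II",
    modalities=[Modality("tool_kinematics", 100.0, "m", "file://run01/kin.csv"),
                Modality("endoscope_video", 30.0, "frame", "file://run01/vid.mp4")],
    metadata={"operator_level": "novice"},
)
print(rec.procedure, [m.name for m in rec.modalities])
```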

    Real-time phase recognition in novel needle-based intervention: a multi-operator feasibility study

    No full text
    Purpose. One of the goals of new navigation systems in the operating room and in outpatient clinics is to support the surgeon's decision making while minimizing the additional load on surrounding health personnel. To do so, the system needs context-awareness, providing the surgeon with the most relevant visualization at all times. Such a system could also support surgical training. The objective of this work is to assess the feasibility of automatic surgical phase recognition using tracking data from a novel instrument for injections and biopsies. Methods. An injection into the sphenopalatine ganglion, planned with MRI and CT images, is carried out using optical tracking of the instrument. In this feasibility study, the intervention was performed by 5 operators, 5 times each, on a specially designed phantom. The coordinate information is processed into 7 features characterizing the intervention. Three classifiers, a Hidden Markov Model (HMM), a Support Vector Machine (SVM), and a combination of the two (SVM+HMM), are trained on manually annotated data and cross-validated for intra- and inter-operator variability. Standard test metrics are used to compare the performance of each classifier. Results. The HMM alone and the SVM alone perform comparably, but feeding the output of the SVM into an HMM yields significantly better classifications: an accuracy of 97.8%, a sensitivity of 93.1%, and a specificity of 98.4%. Conclusion. Trajectory information can provide robust real-time recognition of surgical phases for needle-based interventions.
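    The abstract describes feeding frame-wise SVM outputs into an HMM. The sketch below shows one common reading of such a combination, Viterbi smoothing of per-frame class posteriors under a sticky transition model; the transition probability and phase set are placeholders, not the study's values.

```python
import numpy as np

def viterbi_smooth(posteriors, self_prob=0.95):
    """Temporally smooth per-frame class posteriors (e.g. from an SVM
    with probability outputs) with a sticky HMM: phases are assumed
    to change rarely between consecutive frames.

    posteriors: (T, K) rows of per-frame class probabilities.
    Returns the most likely phase sequence (length T).
    """
    T, K = posteriors.shape
    trans = np.full((K, K), (1 - self_prob) / (K - 1))
    np.fill_diagonal(trans, self_prob)
    log_e = np.log(posteriors + 1e-12)
    log_t = np.log(trans)
    score = log_e[0].copy()
    back = np.zeros((T, K), dtype=int)
    for t in range(1, T):
        cand = score[:, None] + log_t          # (from_state, to_state)
        back[t] = cand.argmax(axis=0)
        score = cand.max(axis=0) + log_e[t]
    path = [int(score.argmax())]
    for t in range(T - 1, 0, -1):
        path.append(int(back[t][path[-1]]))
    return path[::-1]

# Noisy frame-wise predictions for 3 phases; smoothing removes the flicker.
rng = np.random.default_rng(2)
true = np.repeat([0, 1, 2], 40)
post = np.full((120, 3), 0.1)
post[np.arange(120), true] = 0.8
post[rng.integers(0, 120, 15)] = 1 / 3         # ambiguous frames
print(viterbi_smooth(post)[:10])
```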

    Comparative Assessment of a Novel Optical Human-Machine Interface for Laparoscopic Telesurgery

    No full text
    This paper introduces a novel type of human-machine interface for laparoscopic telesurgery that employs an optical sensor. A Raven-II laparoscopic robot (Applied Dexterity Inc) was teleoperated using two different human-machine interfaces, namely the Sigma.7 electro-mechanical device (Force Dimension Sarl) and the Leap Motion (Leap Motion Inc) infrared stereoscopic camera. Based on this hardware platform, a comparative study of both systems was performed through objective and subjective metrics obtained from a population of 10 subjects. The participants were asked to perform a peg-transfer task and to answer a questionnaire. The results confirm that fine tracking of the hand can be performed with the Leap Motion sensor, including accurate finger motion acquisition to control the jaws of the robot's laparoscopic instrument. Furthermore, the observed performance of the optical interface proved comparable to that of traditional electro-mechanical devices, such as the Sigma.7, during the execution of highly dexterous laparoscopic gestures.
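    The abstract reports that finger tracking drove the jaws of the laparoscopic instrument. The sketch below shows one natural hand-to-tool mapping (workspace scaling plus pinch distance to jaw angle); all gains and limits are illustrative, not taken from the study's controller.

```python
import numpy as np

def hand_to_robot(palm_xyz_mm, pinch_dist_mm,
                  scale=0.3, jaw_open_deg=60.0,
                  pinch_open_mm=80.0, pinch_closed_mm=15.0):
    """Map a tracked hand pose to a tool-tip command.

    palm_xyz_mm: palm position from the optical sensor (mm).
    pinch_dist_mm: thumb-to-index distance (mm).
    All gains and limits here are hypothetical placeholders.
    """
    # Scale the hand workspace down to the instrument workspace.
    tip_xyz_mm = scale * np.asarray(palm_xyz_mm, dtype=float)
    # Linearly map pinch distance to jaw opening, clamped to [0, max].
    frac = (pinch_dist_mm - pinch_closed_mm) / (pinch_open_mm - pinch_closed_mm)
    jaw_deg = jaw_open_deg * float(np.clip(frac, 0.0, 1.0))
    return tip_xyz_mm, jaw_deg

# A half-closed pinch commands a partially closed jaw.
tip, jaw = hand_to_robot([120.0, 40.0, -60.0], pinch_dist_mm=30.0)
print(tip, round(jaw, 1))
```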